Search for: All records
Total Resources: 2
- Author / Contributor
  - Wang, Yu-Shuen (2)
  - Babu, Sabarish (1)
  - Chung, Chih-Han (1)
  - Hsieh, Ping-Chun (1)
  - Li, Tzu-Mao (1)
  - Lien, Yun-Hsuan (1)
  - Venkatakrishnan, Rohith (1)
  - Venkatakrishnan, Roshan (1)
- In offline reinforcement learning (RL), updating the value function with the discrete-time Bellman equation often encounters challenges because the available data covers only a limited portion of the state space, and the Bellman equation cannot accurately predict the value of unvisited states. To address this issue, we introduce a solution that bridges continuous- and discrete-time RL methods, capitalizing on the advantages of each. Our method uses a discrete-time RL algorithm to derive the value function from a dataset while ensuring that the function's first derivative aligns with the local characteristics of states and actions, as defined by the Hamilton-Jacobi-Bellman equation in continuous-time RL. We provide practical algorithms for both deterministic and stochastic policy gradient methods. Experiments on the D4RL dataset show that incorporating this first-order information significantly improves policy performance for offline RL problems. (An illustrative sketch of the first-order idea follows the results below.)
- Venkatakrishnan, Roshan; Venkatakrishnan, Rohith; Chung, Chih-Han; Wang, Yu-Shuen; Babu, Sabarish (ACM Transactions on Applied Perception). Humans communicate by writing, often taking notes that assist thinking. With the growing popularity of collaborative Virtual Reality (VR) applications, it is imperative that we better understand the factors that affect writing in these virtual experiences. On-air writing is a popular paradigm in VR because it is simple to implement and requires no specialized hardware, yet many factors can affect its efficacy. In this work, we investigated how combinations of these factors affect users' on-air writing performance, aiming to understand the circumstances under which users can write in VR both effectively and efficiently. We studied the effects of three factors: (1) input modality: brush vs. near-field raycast vs. pointing gesture; (2) inking trigger method: haptic feedback vs. button-based trigger; and (3) canvas geometry: plane vs. hemisphere. To evaluate writing performance, we conducted an empirical study with thirty participants, asking them to write the words we indicated under different combinations of these factors. Dependent measures, including writing speed, accuracy rates, and perceived workload, were analyzed. Results revealed that the brush-based input modality produced the best writing performance, that haptic feedback was not always more effective than button-based triggering, and that there are trade-offs associated with the different canvas geometries. This work lays a foundation for future investigations that seek to understand and further improve the on-air writing experience in immersive virtual environments.
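The offline RL result above describes constraining a discrete-time value function so that its first derivative is consistent with the Hamilton-Jacobi-Bellman (HJB) equation. The paper's actual formulation is not reproduced on this page; the snippet below is only a minimal PyTorch sketch of that general idea, in which the network architecture, the assumed dynamics estimate `dynamics(s, a)`, the discount-to-rate conversion `rho`, and the penalty weight `lam` are illustrative assumptions rather than the authors' method.

```python
# Minimal, illustrative sketch only: a discrete-time Bellman regression
# augmented with a first-order (HJB-style) consistency penalty on the
# value function's gradient. Names, shapes, and the exact residual are
# assumptions for illustration, not the paper's implementation.
import torch
import torch.nn as nn

state_dim = 3                                   # assumed toy dimensionality
value_net = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(), nn.Linear(256, 1))
gamma, dt, lam = 0.99, 0.05, 1.0                # discount, step size, penalty weight
rho = -torch.log(torch.tensor(gamma)) / dt      # continuous-time rate, gamma = exp(-rho * dt)

def loss_fn(s, a, r, s_next, dynamics):
    """s, a, r, s_next: mini-batch tensors from the offline dataset.
    dynamics(s, a): assumed estimate of the state derivative ds/dt at (s, a)."""
    # Discrete-time Bellman regression toward a bootstrapped target.
    v = value_net(s)
    with torch.no_grad():
        target = r + gamma * value_net(s_next)
    bellman_loss = ((v - target) ** 2).mean()

    # First-order term: push dV/ds toward the local HJB relation
    # r + <grad V, ds/dt> - rho * V ~= 0 at the dataset's (s, a) pairs.
    s_req = s.clone().requires_grad_(True)
    v_req = value_net(s_req)
    grad_v = torch.autograd.grad(v_req.sum(), s_req, create_graph=True)[0]
    hjb_residual = r + (grad_v * dynamics(s, a)).sum(-1, keepdim=True) - rho * v_req
    return bellman_loss + lam * (hjb_residual ** 2).mean()
```

In this sketch the second term nudges the gradient of the learned value function toward the local HJB relation at the dataset's state-action pairs, while the first term remains an ordinary Bellman regression; how the two are actually combined and weighted in the paper is not specified here.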